- Syndrome decoding (SD), and equivalently Learning Parity with Noise (LPN), is a fundamental problem in cryptography. It states that for a field $$\mathbb{F}$$, a compressing public matrix $$G \in \mathbb{F}^{k \times n}$$, and a secret sparse vector $$e \in \mathbb{F}^n$$ sampled from some noise distribution, $$Ge$$ is indistinguishable from uniform. Recently, SD has gained significant interest due to its use in pseudorandom correlation generators (PCGs). In pursuit of better efficiency, we propose a new assumption called Stationary Syndrome Decoding (SSD). In SSD, we consider $$q$$ correlated noise vectors $$e_1, \ldots, e_q \in \mathbb{F}^n$$ and associated instances $$G_1 e_1, \ldots, G_q e_q$$, where the noise vectors are restricted to having non-zeros in the same small subset of $$t$$ positions $$L \subset [n]$$. That is, for all $$i \in L$$, $$e_{j,i}$$ is uniformly random, while for all other $$i$$, $$e_{j,i} = 0$$. Although naively reusing the noise vector renders SD and LPN insecure via simple Gaussian elimination, we observe that known attacks do not extend to our correlated noise. We show SSD is unconditionally secure against so-called linear attacks, e.g., advanced information set decoding and representation techniques (Esser and Santini, Crypto 2024). We further adapt the state-of-the-art nonlinear attack (Briaud and Øygarden, Eurocrypt 2023) to SSD and demonstrate, both theoretically and experimentally, resistance to the attack. We apply SSD to PCGs to amortize the cost of the noise generation protocol. For OT and VOLE generation, each instance requires $$O(t)$$ communication instead of $$O(t \log n)$$. For suggested parameters, we observe a 1.5× improvement in running time or a 6–18× reduction in communication. For Beaver triple generation using Ring LPN, our techniques have the potential for substantial amortization due to the high concrete overhead of Ring LPN noise generation.
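The SSD definition above is concrete enough to sketch. Below is a minimal, illustrative sampler for SSD instances over a prime field; the modulus and the parameters n, k, t, q are placeholder values (not the paper's suggested parameters), and the dense matrix representation is only for readability.

```python
# Minimal sketch of sampling Stationary Syndrome Decoding (SSD) instances
# over a prime field F_p. The modulus and parameters are illustrative only.
import secrets

p = 2**31 - 1                     # prime modulus for F_p (placeholder)
n, k, t, q = 1024, 256, 16, 8     # code length, syndrome length, noise weight, #instances

rng = secrets.SystemRandom()

# One noise support L of size t, shared by ALL q instances (the "stationary" part).
L = rng.sample(range(n), t)

def sample_instance():
    """Return (G, e, G*e) for a fresh public matrix G and a noise vector
    supported on the fixed set L."""
    G = [[rng.randrange(p) for _ in range(n)] for _ in range(k)]   # G in F^{k x n}
    e = [0] * n
    for i in L:                    # e_i is uniform for i in L, zero elsewhere
        e[i] = rng.randrange(p)
    Ge = [sum(G[r][i] * e[i] for i in L) % p for r in range(k)]    # syndrome G*e
    return G, e, Ge

# q correlated SSD samples: independent matrices G_j, noise vectors e_j sharing support L.
samples = [sample_instance() for _ in range(q)]
```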
- Secure multi-party computation (MPC) techniques can be used to provide data privacy when users query deep neural network (DNN) models hosted on a public cloud. State-of-the-art MPC techniques can be directly leveraged for DNN models that use simple activation functions (AFs) such as ReLU. However, these techniques are ineffective and/or inefficient for the complex and highly non-linear AFs used in cutting-edge DNN models. We present Compact, which produces piecewise polynomial approximations of complex AFs to enable their efficient use with state-of-the-art MPC techniques. Compact neither requires nor imposes any restriction on model training and results in near-identical model accuracy. To achieve this, we design Compact with input density awareness and use an application-specific, simulated-annealing-type optimization to generate computationally more efficient approximations of complex AFs. We extensively evaluate Compact on four different machine-learning tasks with DNN architectures that use the popular complex AFs silu, gelu, and mish. Our experimental results show that Compact incurs negligible accuracy loss while being 2x-5x computationally more efficient than state-of-the-art approaches for DNN models with a large number of hidden layers. Our work accelerates the adoption of MPC techniques for providing user data privacy even when the queried DNN models have many hidden layers and use complex AFs.
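To make the idea of a piecewise polynomial AF approximation concrete, here is a minimal sketch that fits fixed-degree polynomials to SiLU on hand-picked intervals with NumPy. It is not Compact's algorithm: Compact chooses its approximation with input-density awareness and a simulated-annealing-type search, whereas the breakpoints and degree below are arbitrary assumptions for illustration.

```python
# Illustrative piecewise-polynomial approximation of SiLU (x * sigmoid(x)).
# NOT Compact's construction: breakpoints and degree are fixed by hand here,
# only to show what a piecewise polynomial surrogate of a complex AF looks like.
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

breakpoints = [-8.0, -4.0, -1.0, 1.0, 4.0, 8.0]   # hand-picked intervals (assumption)
degree = 3                                         # per-piece polynomial degree (assumption)

pieces = []
for lo, hi in zip(breakpoints[:-1], breakpoints[1:]):
    xs = np.linspace(lo, hi, 200)
    coeffs = np.polyfit(xs, silu(xs), degree)      # least-squares fit on this interval
    pieces.append((lo, hi, coeffs))

def silu_approx(x):
    """Evaluate the piecewise approximation; clamp inputs outside the fitted range."""
    x = np.clip(x, breakpoints[0], breakpoints[-1])
    for lo, hi, coeffs in pieces:
        if x <= hi:
            return np.polyval(coeffs, x)
    return np.polyval(pieces[-1][2], x)

print(silu(0.5), silu_approx(0.5))   # the two values should be close
```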
- We study verifiable outsourcing of computation in a model where the verifier has black-box access to the function being computed. We introduce the problem of oracle-aided batch verification of computation (OBVC) for a function class $$\mathcal{F}$$. This allows a verifier to efficiently verify the correctness of any $$f \in \mathcal{F}$$ evaluated on a batch of $$n$$ instances $$x_1, \ldots, x_n$$, while only making $$\lambda$$ calls to an oracle for $$f$$ (along with $$O(n \lambda)$$ calls to low-complexity helper oracles), for security parameter $$\lambda$$. We obtain the following positive and negative results:
  - We build OBVC protocols for the class of all functions that admit random-self-reductions. Some of our protocols rely on homomorphic encryption schemes.
  - We show that there cannot exist OBVC schemes for the class of all functions mapping $$\lambda$$-bit inputs to $$\lambda$$-bit outputs, for any $$n = \mathsf{poly}(\lambda)$$.
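The positive result above hinges on random-self-reductions. The toy sketch below illustrates that property for one simple function family, an inner product over a prime field, where a query on x can be answered from oracle calls on points distributed independently of x. It is not the paper's OBVC protocol, and the modulus and dimensions are arbitrary assumptions.

```python
# Illustration of a random self-reduction (RSR), the property the OBVC protocols
# above build on. NOT the paper's protocol; it only demonstrates the RSR property
# for one toy function family: f_s(x) = <s, x> mod p.
import secrets

p = 2**61 - 1                              # prime modulus (illustrative)
m = 4                                      # input length (illustrative)
rng = secrets.SystemRandom()
s = [rng.randrange(p) for _ in range(m)]   # hidden description of f

def f(x):                                  # the function available only as an oracle
    return sum(si * xi for si, xi in zip(s, x)) % p

def rsr_evaluate(x, oracle):
    """Recover f(x) from two oracle calls, each on a uniformly random point."""
    r = [rng.randrange(p) for _ in range(m)]
    x_plus_r = [(xi + ri) % p for xi, ri in zip(x, r)]
    # Linearity gives f(x) = f(x + r) - f(r); both queried points look random.
    return (oracle(x_plus_r) - oracle(r)) % p

x = [rng.randrange(p) for _ in range(m)]
assert rsr_evaluate(x, f) == f(x)
```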
- In this work we demonstrate that allowing differentially private leakage can significantly improve the concrete performance of secure 2-party computation (2PC) protocols. Specifically, we focus on the private set intersection (PSI) protocol of Rindal and Rosulek (CCS 2017), which is the fastest PSI protocol with security against malicious participants. We show that if differentially private leakage is allowed, the cost of the protocol can be reduced by up to 63%, depending on the desired level of differential privacy. On the technical side, we introduce a security model for differentially-private leakage in malicious-secure 2PC. We also introduce two new and improved mechanisms for “differentially private histogram overestimates,” the main technical challenge for differentially-private PSI.
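For intuition on what a "differentially private histogram overestimate" is, here is one generic, textbook-style mechanism: add per-bin Laplace noise, then shift each bin upward so a released count falls below the true count only with small probability. The paper's two improved mechanisms are not reproduced here, and epsilon, delta, and the sensitivity bound below are illustrative assumptions.

```python
# Generic epsilon-DP "histogram overestimate": add Laplace noise to each bin and
# shift every bin upward so a released count underestimates the true count with
# probability at most `delta` per bin. NOT the paper's improved mechanisms.
import math
import random

def dp_histogram_overestimate(counts, epsilon, delta, sensitivity=1):
    """counts: true per-bin counts; one party changes at most `sensitivity` bins by 1."""
    scale = sensitivity / epsilon
    shift = math.ceil(scale * math.log(1.0 / (2.0 * delta)))  # Pr[Laplace(scale) < -shift] <= delta
    released = []
    for c in counts:
        # Laplace(0, scale) as the difference of two i.i.d. exponentials.
        lap = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
        released.append(max(0, math.ceil(c + lap)) + shift)   # shift/rounding are post-processing
    return released

# Example: epsilon = 1, per-bin underestimate probability delta = 2**-20.
print(dp_histogram_overestimate([5, 0, 12, 3], epsilon=1.0, delta=2**-20))
```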